Studying Human Spatial Navigation Processes Using POMDPs
Authors
Abstract
Humans possess the remarkable ability to navigate through large-scale spaces, such as a building or a city, with ease and proficiency. The current series of studies uses Partially Observable Markov Decision Processes (POMDPs) to better understand how humans navigate through large-scale spaces under state uncertainty (i.e., when lost in a familiar environment). To investigate this question, we familiarized subjects with a novel indoor virtual reality environment and then measured their efficiency at navigating from an unspecified location within the environment to a specific goal state. The environments were visually sparse and thus produced a great deal of perceptual aliasing (more than one state produced the same observation). We investigated whether human inefficiency was due to: 1) accessing the cognitive map; 2) updating the belief vector; or 3) an inefficient decision process. The data clearly show that subjects are limited by an inefficient belief-vector updating procedure. We discuss the ramifications of these findings for human way-finding behavior, in addition to more general issues associated with decision making under uncertainty.
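Although the paper reports behavioral results rather than an implementation, the belief-vector update it refers to is the standard Bayes-filter recursion used in POMDP models. The following minimal sketch (Python with NumPy; the 4-state corridor, "step right" action model, and observation model are illustrative assumptions, not the authors' experimental environment) shows how perceptual aliasing keeps the belief spread across several states even after an update.

import numpy as np

# Minimal sketch of a discrete POMDP belief update (Bayes filter).
# The corridor layout, transition matrix, and observation model below are
# illustrative assumptions, not the environment used in the paper.

n_states = 4

# T_right[s, s']: probability of ending in s' after taking "step right" in s
# (with a small chance of failing to move).
T_right = np.array([
    [0.1, 0.9, 0.0, 0.0],
    [0.0, 0.1, 0.9, 0.0],
    [0.0, 0.0, 0.1, 0.9],
    [0.0, 0.0, 0.0, 1.0],
])

# O[s', o]: probability of observing o in state s'.
# Observation 0 ("plain wall") is emitted by states 0 and 2, so those two
# states are perceptually aliased; observation 1 is a doorway, 2 the goal.
O = np.array([
    [0.9, 0.1, 0.0],
    [0.1, 0.9, 0.0],
    [0.9, 0.1, 0.0],
    [0.0, 0.1, 0.9],
])

def belief_update(belief, T, O, obs):
    # Standard POMDP belief update: b'(s') is proportional to
    # O(s', obs) * sum_s T(s, s') * b(s)
    predicted = T.T @ belief              # predict: push belief through the action model
    unnormalized = O[:, obs] * predicted  # correct: weight by observation likelihood
    return unnormalized / unnormalized.sum()

# Start completely "lost" (uniform belief), step right, then see a plain wall.
belief = np.full(n_states, 1.0 / n_states)
belief = belief_update(belief, T_right, O, obs=0)
print(belief)  # probability remains split across several states: the aliased
               # observation cannot pin down the true location in one step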
Similar resources
Representing hierarchical POMDPs as DBNs for multi-scale map learning
We explore the advantages of representing hierarchical partially observable Markov decision processes (H-POMDPs) as dynamic Bayesian networks (DBNs). We use this model for representing and learning multi-resolution spatial maps for indoor robot navigation. Our results show that a DBN representation of H-POMDPs can train significantly faster than the original learning algorithm for H-POMDPs or t...
Computer Science and Artificial Intelligence Laboratory Spatial and Temporal Abstractions in POMDPs Applied to Robot Navigation
Partially observable Markov decision processes (POMDPs) are a well studied paradigm for programming autonomous robots, where the robot sequentially chooses actions to achieve long term goals efficiently. Unfortunately, for real world robots and other similar domains, the uncertain outcomes of the actions and the fact that the true world state may not be completely observable make learning of mo...
Spatial Navigation in Virtual World
Abstract: One of the most challenging and complex tasks when working with virtual worlds is the navigation process. This chapter presents the possibilities of using neural networks as a tool for studying spatial navigation within virtual worlds: how a network learns to predict the next step for a given trajectory, acquiring basic spatial knowledge in terms of landmarks and configuration of spatial layou...
Combining Probabilistic Planning and Logic Programming on Mobile Robots
Key challenges to widespread deployment of mobile robots to interact with humans in real-world domains include the ability to: (a) robustly represent and revise domain knowledge; (b) autonomously adapt sensing and processing to the task at hand; and (c) learn from unreliable high-level human feedback. Partially observable Markov decision processes (POMDPs) have been used to plan sensing and nav...
Quasi-Deterministic Partially Observable Markov Decision Processes
We study a subclass of POMDPs, called quasi-deterministic POMDPs (QDET-POMDPs), characterized by deterministic actions and stochastic observations. While this framework does not model the same general problems as POMDPs, it still captures a number of interesting and challenging problems and, in some cases, has interesting properties. By studying the observability available in this subclass, w...
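As a hedged illustration of why this subclass is more tractable (the notation below is ours, not the paper's, and assumes a deterministic transition function f(s, a) and stochastic observation model O(o | s')): with deterministic transitions the general POMDP belief update

    b'(s') ∝ O(o | s') · Σ_s T(s' | s, a) · b(s)

collapses to

    b'(s') ∝ O(o | s') · Σ_{s : f(s, a) = s'} b(s),

so the predicted belief is obtained simply by pushing the current belief mass through f, and all remaining uncertainty comes from the observation model.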
Journal title:
Volume / Issue:
Pages: -
Publication date: 2004